Conversation
This commit introduces a comprehensive plugin configuration UI with the following enhancements:
- Add PluginSettingsForm component for plugin configuration
- Create modular plugin configuration components:
  * EnvVarsSection - manage plugin environment variables
  * FormField - reusable form field component
  * OrchestrationSection - configure plugin orchestration settings
  * PluginConfigPanel - main plugin configuration panel
  * PluginListSidebar - plugin list and navigation
- Update plugin service to support new configuration endpoints
- Enhance system controller and routes for plugin management
- Update Plugins page with new UI components
- Enhance API service with plugin configuration methods
…ist feature
- Updated Docker Compose configuration to include mock streaming services for testing.
- Introduced `always_persist` flag in audio stream and conversation management, ensuring audio is saved even if transcription fails.
- Enhanced conversation model to track processing status and persist audio data, improving reliability in audio handling.
- Added integration tests to verify the functionality of the always_persist feature, ensuring audio is correctly stored in various scenarios.
- Improved logging for audio processing and conversation state transitions to facilitate debugging and monitoring.
Important: Review skipped. Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the `.coderabbit.yaml` file in this repository. You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.
📝 Walkthrough

This PR introduces comprehensive audio persistence and plugin management features, including always-persist audio recording, structured plugin configuration with metadata discovery, OpenMemory MCP integration with metadata-based isolation, transcription fallback to batch mode, conversation versioning with numeric indices, new web UI components for plugin and miscellaneous settings management, and extensive test infrastructure including mock LLM/STT servers and an integration test suite.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Audio Client
    participant WS as WebSocket Controller
    participant Audio as Audio Job
    participant Conv as Conversation Job
    participant Redis as Redis
    participant DB as Database
    participant TJ as Transcription Job
    Client->>WS: audio_start (always_persist=true)
    WS->>Audio: enqueue audio_persistence_job
    WS->>WS: initialize_streaming_session()
    Audio->>Redis: check conversation:current:session_id
    Redis-->>Audio: null (first time)
    alt always_persist_enabled = true
        Audio->>DB: create placeholder conversation
        DB-->>Audio: conversation_id
        Audio->>Redis: SET conversation:current:session_id → conversation_id (TTL: 1hr)
        Audio->>Audio: log placeholder creation
    else always_persist_enabled = false
        Audio->>Audio: log waiting for speech detection
    end
    Client->>WS: audio chunks
    WS->>Audio: accumulate chunks
    loop Every 30 minutes of audio
        WS->>WS: _process_rolling_batch()
        WS->>Conv: create batch conversation
        WS->>TJ: enqueue batch transcription job
    end
    WS->>TJ: stream_speech_detection_job
    TJ->>TJ: detect speech
    alt speech detected
        TJ->>Conv: process with conversation_id from Redis
        Conv->>DB: update placeholder status to 'pending_transcription'
    else no speech detected
        TJ->>Conv: mark placeholder as 'transcription_failed'
        TJ->>TJ: enqueue transcription_fallback_check_job
    end
    TJ->>TJ: transcription_fallback_check_job
    TJ->>TJ: wait for batch transcription
    TJ->>Conv: finalize conversation
```
```mermaid
sequenceDiagram
    participant Admin as Admin User
    participant UI as Plugin UI
    participant API as System API
    participant PS as Plugin Service
    participant PC as Plugin Component
    participant Disk as File System
    Admin->>UI: navigate to Plugins
    UI->>API: GET /admin/plugins/metadata
    API->>PS: get_plugin_metadata(plugin_id)
    PS->>Disk: load orchestration.yml
    PS->>Disk: load plugin config.yml
    PS->>Disk: load schema.yml (if exists)
    PS->>PS: infer_schema_from_config()
    PS->>PS: mask_secrets_in_config()
    PS-->>API: plugin metadata with schema
    API-->>UI: metadata payload
    UI->>UI: render PluginSettingsForm
    UI->>UI: load plugin in PluginConfigPanel
    Admin->>UI: edit config values
    Admin->>UI: click Test Connection
    UI->>API: POST /admin/plugins/test-connection/plugin_id
    API->>PC: test_connection(config)
    PC->>PC: validate required fields
    PC->>PC: attempt connection
    PC-->>API: success/failure response
    API-->>UI: test result
    Admin->>UI: click Save
    UI->>API: POST /admin/plugins/config/structured/plugin_id
    API->>PS: update_plugin_config_structured()
    PS->>Disk: backup existing config
    PS->>Disk: update plugins.yml
    PS->>Disk: update plugin/config.yml
    PS->>Disk: update .env file
    PS-->>API: success response with restart indicator
    API-->>UI: save confirmation
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
| Metric | Count |
|---|---|
| ✅ Passed | 80 |
| ❌ Failed | 12 |
| 📊 Total | 92 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
- Introduced a new YAML configuration file to simulate transcription failures using invalid API keys for Deepgram services.
- Configured both standard and streaming speech-to-text models with invalid credentials to facilitate testing of error handling in audio processing.
- Enhanced the testing framework by providing mock models for LLM and embeddings, ensuring comprehensive coverage of failure scenarios.
- Updated logging levels for transcription errors to use error severity, providing clearer insights into issues.
- Distinguish between transcription service failures and legitimate no-speech scenarios in session termination logs.
- Enhanced session failure messages to guide users in checking transcription service configurations.
- Introduced functions to retrieve and save miscellaneous settings, including `always_persist_enabled` and `use_provider_segments`, using OmegaConf.
- Updated the system controller and routes to handle new endpoints for managing miscellaneous settings, ensuring admin access control.
- Refactored audio processing jobs to read the `always_persist_enabled` setting from global configuration, improving audio persistence behavior.
- Enhanced the web UI to allow administrators to view and modify miscellaneous settings, providing better control over audio processing features.
- Added integration tests to verify the functionality of the new settings management, ensuring robust handling of audio persistence scenarios.
- Updated the Makefile to introduce new test commands for running tests with and without API keys, improving CI integration.
- Refactored integration tests to replace static sleep calls with polling mechanisms for conversation creation, enhancing reliability and reducing flakiness.
- Added a new keyword to wait for conversations by client ID, streamlining test logic and improving readability.
- Updated documentation in the Makefile to reflect changes in test commands and configurations.
…nality
- Added an asynchronous function to initialize and register an OpenMemory user if the OpenMemory MCP provider is configured, improving user management.
- Enhanced the MCPClient to accept custom metadata when adding memories, allowing for better tracking and filtering of memories by user.
- Updated the OpenMemoryMCPService to utilize the configured OpenMemory user for memory operations, ensuring accurate user context in memory processing.
- Modified integration tests to use shorter device names for consistency and to avoid truncation issues, improving test reliability.
@coderabbitai review

✅ Actions performed: Review triggered.
- Created Dockerfile for a mock LLM server, including dependencies and configuration for running the server on port 11435.
- Created Dockerfile for a mock streaming STT server, including dependencies and configuration for running the server on port 9999.
- Both Dockerfiles streamline the setup process for testing related functionalities.
Actionable comments posted: 20
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In `@backends/advanced/src/advanced_omi_backend/app_factory.py`:
- Around line 45-88: The initialize_openmemory_user function performs blocking
network operations during startup; change its invocation so it runs in the
background instead of being awaited synchronously: locate the startup code that
calls initialize_openmemory_user and replace the direct await with scheduling it
via asyncio.create_task(initialize_openmemory_user()) (or use loop.create_task)
so the app doesn't block; alternatively, if you prefer time-bounding, add a
short per-call timeout around MCPClient.test_connection / add_memories (e.g.,
asyncio.wait_for) inside initialize_openmemory_user to limit how long MCPClient
(used in the async with block and methods test_connection, add_memories,
delete_memory) can hang. Ensure you keep build_memory_config_from_env and
MemoryProvider checks unchanged and preserve logging behavior when running as a
background task.
In `@backends/advanced/src/advanced_omi_backend/controllers/system_controller.py`:
- Around line 1243-1306: In test_plugin_connection, env vars from the passed-in
config are only added to test_config separately and not used to expand
placeholders inside settings; update test_plugin_connection so you first build
test_config from config['settings'], then merge provided env_vars (handling
masked values by reading os.getenv when value == '••••••••••••' and lower-casing
keys) into that settings dict before calling expand_env_vars(test_config), so
any ${ENV_VAR} placeholders in settings resolve using the supplied env_vars;
keep using plugin_class.test_connection for the call and preserve
logging/exception behavior.
In `@backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py`:
- Around line 646-692: The test_connection block creates a HAMCPClient
(mcp_client) but never closes its internal httpx.AsyncClient; wrap the usage of
mcp_client in a try/finally (or async with) around the calls to _render_template
and get_all_entities, and ensure you call await mcp_client.close() in the
finally so the client is always closed even on errors; update the function
(test_connection) to reference mcp_client, _render_template, get_all_entities,
and close as the cleanup point.
- Around line 667-672: The code calls a non-existent method
mcp_client.get_all_entities() which will raise at runtime; change the call to
mcp_client.discover_entities() and compute entity_count from the returned
dictionary (e.g., entity_count = len(entities_dict)) while keeping the existing
try/except behavior; update references to the symbols mcp_client,
get_all_entities(), discover_entities(), and entity_count so the plugin uses
discover_entities()'s dict result instead of the removed API.
In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py`:
- Around line 75-84: The current response exposes job.exc_info (full stack
trace) when status == "failed"; replace this with a short safe summary (e.g.,
"error_summary" containing the exception message or the first line of
job.exc_info) for regular users and only include the full exc_info when the
requester is an admin—check a known admin flag (e.g., current_user.is_admin or
request.user.is_admin) before adding "exc_info" to the response; update the
block that builds response (where response = {"job_id": job.id, "status":
status} and the following failed-handling code) to produce a sanitized
"error_message"/"error_summary" and conditionally add full "exc_info" for admins
only.
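The admin-gated sanitization this asks for can be sketched as follows (helper name and summary heuristic are hypothetical; the route in queue_routes.py would apply the same idea):

```python
def build_job_status_response(job_id, status, exc_info, is_admin):
    """Return a job-status payload that hides full tracebacks from non-admins."""
    response = {"job_id": job_id, "status": status}
    if status == "failed" and exc_info:
        # Short, safe summary: the last non-empty line of the traceback,
        # which normally carries the exception type and message.
        lines = [ln for ln in exc_info.strip().splitlines() if ln.strip()]
        response["error_summary"] = lines[-1] if lines else "job failed"
        if is_admin:
            response["exc_info"] = exc_info  # full trace for admins only
    return response

trace = "Traceback (most recent call last):\n  ...\nValueError: bad input"
user_view = build_job_status_response("j1", "failed", trace, is_admin=False)
admin_view = build_job_status_response("j1", "failed", trace, is_admin=True)
```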
In
`@backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py`:
- Around line 121-131: The info log currently prints the full payload (payload
variable) including sensitive fields like text and user metadata; update the
logging in memory creation to redact PII by logging only safe fields (e.g.,
self.user_id, self.client_name/app, server_url) plus text length and/or metadata
keys rather than values, and move the full payload or a redacted copy to
memory_logger.debug instead of memory_logger.info; specifically change the
memory_logger.info call that references payload to instead log a concise message
with self.user_id, self.client_name, the length of text (len(text)), and a
redacted/summary of default_metadata, and use memory_logger.debug for any full
or redacted payload dump.
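The redaction idea can be sketched as a small helper (hypothetical name; the real code would log this summary at info level and the full payload at debug):

```python
def summarize_memory_payload(user_id, client_name, text, metadata):
    """Build a log-safe summary: lengths and key names only, no raw values."""
    return {
        "user_id": user_id,
        "app": client_name,
        "text_length": len(text),
        "metadata_keys": sorted(metadata.keys()),
    }

summary = summarize_memory_payload(
    "u-123", "chronicle", "a private transcript...",
    {"user_email": "a@b.c", "source_id": "s1"},
)
```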
In
`@backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py`:
- Around line 204-223: Several methods still change MCP user context or perform
operations by the configured OpenMemory user instead of consistently using
metadata-based scoping; update get_memory, update_memory,
delete_all_user_memories (and any helper that switches MCP user) to perform
operations under the shared OpenMemory user but enforce chronicle_user_id
metadata checks: for get_memory, fetch the memory via MCP and verify
result["metadata"].get("chronicle_user_id")==user_id before returning; for
update_memory, first retrieve and validate metadata ownership, then apply
updates only if ownership matches; for delete_all_user_memories, search for
memories with metadata["chronicle_user_id"]==user_id (use the same search/filter
pattern and _mcp_result_to_memory_entry) and delete each matched memory by id
rather than switching user context or bulk-deleting across the configured user.
Ensure every path that touches memories verifies metadata ownership by
chronicle_user_id.
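A sketch of the metadata-ownership check (helper names are hypothetical; `fetch` stands in for the MCP lookup under the shared OpenMemory user):

```python
OWNER_KEY = "chronicle_user_id"

def owned_by(memory: dict, user_id: str) -> bool:
    """True when the memory's metadata marks it as belonging to user_id."""
    return (memory.get("metadata") or {}).get(OWNER_KEY) == user_id

def get_memory_scoped(fetch, memory_id: str, user_id: str):
    """Fetch under the shared OpenMemory user, then enforce metadata ownership."""
    memory = fetch(memory_id)
    if memory is None or not owned_by(memory, user_id):
        return None  # treat other users' memories as not found
    return memory

store = {"m1": {"id": "m1", "metadata": {"chronicle_user_id": "alice"}}}
mine = get_memory_scoped(store.get, "m1", "alice")
theirs = get_memory_scoped(store.get, "m1", "bob")
```

update_memory and delete_all_user_memories would reuse the same `owned_by` gate before mutating anything.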
- Around line 145-159: The memory_logger.info call currently includes raw
user_email PII; remove or redact the email from the log message and instead log
non-PII identifiers (e.g., user_id, source_id, client_id) only, while still
passing user_email in the metadata for self.mcp_client.add_memories if needed;
update the memory_logger.info invocation in openmemory_mcp.py (the block that
builds enriched_transcript, metadata and calls self.mcp_client.add_memories) to
exclude or mask user_email.
In `@backends/advanced/src/advanced_omi_backend/services/plugin_service.py`:
- Around line 344-436: mask_secrets_in_config currently only masks values that
still contain a ${ENV_VAR} placeholder (uses extract_env_var_name), so when
load_plugin_config has already expanded placeholders to real secrets those
values leak; update mask_secrets_in_config to additionally iterate secret env
names from config_schema['env_vars'] and for each secret env_var compare its
os.environ value to each string config value (or detect exact match) and replace
matching values with the mask (and empty string if env not set), keeping the
existing placeholder-based masking logic intact; references:
mask_secrets_in_config, extract_env_var_name, get_plugin_metadata (calls
load_plugin_config -> mask_secrets_in_config), config_schema and
schema['env_vars'] for locating secret env var names.
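The value-based masking pass could look roughly like this (hypothetical helper; `DEMO_API_KEY` is a made-up env var for the demo, and the mask string mirrors the UI's masked-dots value):

```python
import os

MASK = "••••••••••••"

def mask_secret_values(config: dict, secret_env_names: list[str]) -> dict:
    """Mask config values that equal the live value of a secret env var.

    Complements placeholder-based masking: it catches secrets that were
    already expanded from ${ENV_VAR} form into their real values.
    """
    secret_values = {os.environ[name] for name in secret_env_names if os.environ.get(name)}
    masked = {}
    for key, value in config.items():
        if isinstance(value, str) and value in secret_values:
            masked[key] = MASK
        else:
            masked[key] = value
    return masked

os.environ["DEMO_API_KEY"] = "s3cr3t"
cfg = mask_secret_values({"api_key": "s3cr3t", "host": "example.com"}, ["DEMO_API_KEY"])
```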
In `@backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py`:
- Around line 105-110: The Redis TTL for the conversation key (set via
redis_client.set(conversation_key, conversation.conversation_id, ex=3600)) is
too short for long-running jobs; either extend the expiry to the job's maximum
runtime or refresh the key periodically while the job is active. Update the call
to use a constant/MAX runtime (e.g., MAX_JOB_RUNTIME_SECONDS) or add a
background refresh loop that calls redis_client.expire(conversation_key,
new_ttl) (or resets redis_client.set with the same value) from within the audio
job/task function that owns conversation_key and conversation.conversation_id so
the key never expires mid-session.
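A sketch of the refresh-loop option (hypothetical helper, with a fake Redis client standing in for `redis_client`):

```python
import threading
import time

def keep_key_alive(redis_client, key, ttl_seconds, stop_event, interval=None):
    """Periodically re-arm the TTL on `key` while the owning job is running."""
    interval = interval or max(ttl_seconds // 2, 1)
    while not stop_event.wait(interval):
        redis_client.expire(key, ttl_seconds)

class FakeRedis:
    """Minimal stub recording expire() calls for the demo."""
    def __init__(self):
        self.ttls = {}
    def expire(self, key, ttl):
        self.ttls[key] = ttl

r = FakeRedis()
stop = threading.Event()
t = threading.Thread(target=keep_key_alive, args=(r, "conversation:current:s1", 3600, stop, 0.01))
t.start()
time.sleep(0.05)  # the "job" runs; the key keeps getting refreshed
stop.set()
t.join()
```

The job sets `stop` on completion, so the key is no longer refreshed and expires naturally after the final TTL.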
In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py`:
- Around line 777-791: The polling loop masks failed RQ jobs because it checks
batch_job.is_finished before batch_job.is_failed; change the logic in the loop
that polls batch_job (uses batch_job.refresh(), batch_job.is_finished,
batch_job.is_failed, batch_job.exc_info, timeout_seconds, waited, max_wait) to
check batch_job.is_failed first and immediately raise an Exception including
batch_job.exc_info when true, otherwise handle is_finished success as before;
keep the polling sleep/wait logic and timeout behavior unchanged.
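The reordered polling logic can be sketched as follows (hypothetical helper; the stub sets both flags only to demonstrate that the failure check must win):

```python
import time

def wait_for_batch_job(job, timeout_seconds=60, poll_interval=0.01):
    """Poll an RQ-style job, surfacing failures instead of masking them."""
    waited = 0.0
    while waited < timeout_seconds:
        job.refresh()
        if job.is_failed:  # check failure FIRST so a failed job raises
            raise Exception(f"batch transcription failed: {job.exc_info}")
        if job.is_finished:
            return job
        time.sleep(poll_interval)
        waited += poll_interval
    raise TimeoutError("batch transcription did not finish in time")

class StubJob:
    is_failed = True
    is_finished = True  # ordering bug: checking this first would hide the failure
    exc_info = "boom"
    def refresh(self):
        pass

try:
    wait_for_batch_job(StubJob(), timeout_seconds=1)
    error_seen = None
except Exception as exc:
    error_seen = str(exc)
```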
In `@backends/advanced/webui/src/components/plugins/FormField.tsx`:
- Around line 4-15: FieldSchema declares type 'array' but renderField() lacks a
case for 'array', causing arrays to be coerced to strings via the default
onChange(e.target.value); either implement an explicit array renderer in
renderField() that renders a proper list input (e.g., add a case 'array' that
parses/updates arrays via JSON or multiple inputs and calls onChange with an
actual array) or remove 'array' from the FieldSchema union so the type cannot be
produced; update any callers that rely on plugin_service.py's array schemas to
match the chosen approach.
- Around line 89-156: The switch case for 'password' contains a lexical
declaration (const displayValue) which violates the noSwitchDeclarations rule;
wrap the entire case 'password' contents in a block (add { ... } after case
'password':) so displayValue and related declarations (isMaskedValue, isEditing,
setIsEditing, onChange, showPassword, etc.) are block-scoped, preserving
behavior of the input, onFocus handler, show/hide button, and conditional help
text.
In `@backends/advanced/webui/src/components/plugins/OrchestrationSection.tsx`:
- Around line 71-101: The toggle is currently a non-focusable clickable div (in
OrchestrationSection) and the label's htmlFor points to no input; replace or
refactor the toggle into an actual accessible control: render a hidden
checkbox/input (id referenced by the label) or use a <button> with type="button"
that maps to handleEnabledChange, add keyboard handlers (onKeyDown to toggle on
Space/Enter) and proper ARIA state (aria-checked reflecting config.enabled,
role="switch" or role="checkbox"), and ensure the element uses disabled to set
aria-disabled and visually indicates focus (focus-visible styles) so screen
readers and keyboard users can operate the toggle; update references to
config.enabled, disabled, and handleEnabledChange accordingly.
In `@backends/advanced/webui/src/components/plugins/PluginListSidebar.tsx`:
- Around line 132-155: The toggle in PluginListSidebar is implemented as a
non-focusable <label> with an onClick, so screen readers and keyboard users
cannot properly interact with it; replace it with a real focusable control
(e.g., a button or an input[type="checkbox"]) that calls
onToggleEnabled(plugin.plugin_id, !plugin.enabled), exposes aria-checked (or
checked) and role if needed, includes accessible text/aria-label reflecting
Enabled/Disabled, and keeps the same visual classes so behavior and styling
remain intact; update any onClick handlers on the wrapper to use
onKeyDown/keyboard activation patterns if you keep a custom element.
- Around line 99-110: The plugin row currently rendered as a clickable <div> is
not keyboard-accessible; replace or update the element used in PluginListSidebar
(the block keyed by plugin.plugin_id and invoking onSelectPlugin) so it is a
semantic interactive control: use a <button> (or add role="button", tabIndex=0)
and wire keyboard handling (activate on Enter/Space via onKeyDown) and add
appropriate ARIA state (e.g., aria-pressed or aria-selected when isSelected) and
an accessible label so keyboard and assistive tech can select plugins; ensure
existing onClick/onSelectPlugin calls are preserved and visual focus styles
remain.
In `@backends/advanced/webui/src/components/PluginSettingsForm.tsx`:
- Around line 98-119: The current initialization in PluginSettingsForm creates a
fresh config that overwrites any existing plugin configuration; instead hydrate
from existing values: initialize orchestration using plugin.orchestration or
plugin.config?.orchestration (preserving enabled, events, condition) and for
settings/env_vars populate each key by first checking
plugin.config?.settings[key] (or plugin.config?.env_vars[key]) then falling back
to schema.value or schema.default; ensure you only use defaults when no current
value exists and keep existing events/condition intact (refer to the config
object, orchestration, settings, env_vars, and plugin.config_schema to locate
fields).
- Around line 166-171: The connection test is sending masked secret values from
currentConfig.env_vars which causes false failures; update handleTestConnection
to reuse the same filtering used in handleSave (use selectedPluginId,
currentConfig.env_vars and originalPluginConfig.env_vars) — build a filtered
env_vars object that skips keys whose value is the mask (e.g., '••••' or
startsWith masked dots) or that are unchanged from
originalPluginConfig.env_vars, then pass that filtered env_vars to
systemApi.testPluginConnection so masked secrets are not sent to the backend.
In `@backends/advanced/webui/src/hooks/useSimpleAudioRecording.ts`:
- Around line 231-262: The server-error handler calls cleanup() and flips state
but startRecording can still continue; add an abort ref (e.g., abortedRef or
startAbortRef) that the error handler sets to true when a server error is
received, then update startRecording to check that ref after any await points
and before setting setCurrentStep or setIsRecording so the flow bails out if
aborted; ensure cleanup() is still called once and the same abort guard is added
in the other startRecording path referenced in the comment (the second
startRecording block) so both flows respect the server-error abort.
In `@tests/configs/mock-services.yml`:
- Around line 49-55: The mock STT service entry named "mock-stt" has model_url
set to "http://localhost:9999", which is inconsistent with other mock services
and fails from inside Docker; update the model_url value for the "mock-stt"
service (the YAML block with name: mock-stt and model_type: stt) to use
"http://host.docker.internal:9999" so containerized tests can reach the
host-based mock STT batch service.
🟡 Minor comments (7)
backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py-753-760 (1)
753-760: Add defensive checks before accessing `client_state` attributes in `_handle_batch_mode_audio`.

Lines 751-753 access `client_state.user_id`, `client_state.user_email`, and `client_state.client_id` without validation. While these are set during the `audio-start` message handler (lines 1342-1344) and the state machine enforces `audio-start` before `audio-chunk` processing can occur, defensive checks using `hasattr` or `getattr` would guard against protocol violations or future refactoring that might bypass the current ordering guarantee.

tests/resources/redis_keywords.robot-123-155 (1)
123-155: Make key selection deterministic when multiple keys match.
`KEYS` output order isn't guaranteed; picking the first match can lead to flaky assertions when multiple suffix keys exist. Sort the keys and, when `expected_conversation_id` is provided, prefer the key that matches the expected value instead of failing on the first key's mismatch.

💡 Suggested refinement (deterministic selection + expected value match)
```diff
@@
-    @{keys}=    Split String    ${result.stdout}    \n
-    ${keys_list}=    Evaluate    [k for k in ${keys} if k.strip()]
-    ${num_keys}=    Get Length    ${keys_list}
+    @{keys}=    Split String    ${result.stdout}    \n
+    ${keys_list}=    Evaluate    [k for k in ${keys} if k.strip()]
+    Sort List    ${keys_list}
+    ${num_keys}=    Get Length    ${keys_list}
@@
-    ${redis_key}=    Get From List    ${keys_list}    0
-    Log    Found Redis key: ${redis_key}
-
-    # Get the conversation_id value
-    ${conversation_id}=    Redis Command    GET ${redis_key}
-
-    # Optionally verify it matches expected value
-    IF    '${expected_conversation_id}' != '${None}'
-        Should Be Equal As Strings    ${conversation_id}    ${expected_conversation_id}
-        ...    Redis key value mismatch: expected ${expected_conversation_id}, got ${conversation_id}
-    END
+    ${redis_key}=    Get From List    ${keys_list}    -1
+    ${conversation_id}=    Redis Command    GET ${redis_key}
+
+    # If expected value provided, prefer the key that matches it
+    IF    '${expected_conversation_id}' != '${None}'
+        FOR    ${candidate}    IN    @{keys_list}
+            ${candidate_value}=    Redis Command    GET ${candidate}
+            IF    '${candidate_value}' == '${expected_conversation_id}'
+                ${redis_key}=    Set Variable    ${candidate}
+                ${conversation_id}=    Set Variable    ${candidate_value}
+                Exit For Loop
+            END
+        END
+        Should Be Equal As Strings    ${conversation_id}    ${expected_conversation_id}
+        ...    Redis key value mismatch: expected ${expected_conversation_id}, got ${conversation_id}
+    END
+    Log    Found Redis key: ${redis_key}
```

backends/advanced/webui/src/pages/System.tsx-178-189 (1)
178-189: Avoid misc-settings calls for non-admin users.
`/api/misc-settings` is admin-only; calling it when `isAdmin` is false will just generate 403s and console noise.

✅ Minimal guard

```diff
   const loadMiscSettings = async () => {
+    if (!isAdmin) return
     try {
       setMiscLoading(true)
       const response = await systemApi.getMiscSettings()
```

backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py-122-124 (1)
122-124: Remove unused f-string prefix.
Ruff F541: no interpolation is used here.

🔧 Quick fix

```diff
-            logger.info(f"🔍 always_persist=False - will wait for speech detection to create conversation")
+            logger.info("🔍 always_persist=False - will wait for speech detection to create conversation")
```

backends/advanced/webui/src/components/ConversationVersionDropdown.tsx-140-143 (1)
140-143: Guard against `findIndex` returning -1 (shows `v0`).

If the active id isn't found, `findIndex` yields `-1`, resulting in a misleading `v0`. Add a guard (and consider a nullish check for the numeric fallback).

🛠️ Suggested fix
```diff
+  const formatActiveVersionLabel = (
+    versions: Array<TranscriptVersion | MemoryVersion>,
+    activeId: string,
+    fallbackNumber?: number
+  ) => {
+    const idx = versions.findIndex(v => v.version_id === activeId)
+    if (idx >= 0) return `v${idx + 1}`
+    return fallbackNumber != null ? `v${fallbackNumber}` : '-'
+  }
 ...
-  Transcript: {versionHistory ?
-    `v${versionHistory.transcript_versions.findIndex(v => v.version_id === versionHistory.active_transcript_version) + 1}` :
-    (versionInfo?.active_transcript_version_number ? `v${versionInfo.active_transcript_version_number}` : '-')
-  }
+  Transcript: {versionHistory
+    ? formatActiveVersionLabel(
+        versionHistory.transcript_versions,
+        versionHistory.active_transcript_version,
+        versionInfo?.active_transcript_version_number
+      )
+    : (versionInfo?.active_transcript_version_number != null ? `v${versionInfo.active_transcript_version_number}` : '-')
+  }
 ...
-  Memory: {versionHistory ?
-    `v${versionHistory.memory_versions.findIndex(v => v.version_id === versionHistory.active_memory_version) + 1}` :
-    (versionInfo?.active_memory_version_number ? `v${versionInfo.active_memory_version_number}` : '-')
-  }
+  Memory: {versionHistory
+    ? formatActiveVersionLabel(
+        versionHistory.memory_versions,
+        versionHistory.active_memory_version,
+        versionInfo?.active_memory_version_number
+      )
+    : (versionInfo?.active_memory_version_number != null ? `v${versionInfo.active_memory_version_number}` : '-')
+  }
```

Also applies to: 202-205
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py-64-64 (1)
64-64: Make `metadata` explicitly optional.

Implicit Optional is flagged by type checkers/ruff. Use `Optional[Dict[str, Any]]` (or `Dict[str, Any] | None`) to satisfy PEP 484. `Optional` is already imported in the file.

♻️ Suggested fix
```diff
-    async def add_memories(self, text: str, metadata: Dict[str, Any] = None) -> List[str]:
+    async def add_memories(self, text: str, metadata: Optional[Dict[str, Any]] = None) -> List[str]:
```

tests/Makefile-244-259 (1)
244-259: Ensure the API-key test target starts containers with the real config.

Lines 244-259 run tests without ensuring containers are started with `deepgram-openai.yml`. If `test-no-api` was run previously, this target will still hit mock services. Consider restarting containers with the real config in this target.

🛠️ Suggested adjustment
```diff
 test-with-api-keys:
 	@echo "🧪 Running tests that require API keys..."
 	@if [ -z "$$DEEPGRAM_API_KEY" ] || [ -z "$$OPENAI_API_KEY" ]; then \
 		echo "❌ Error: DEEPGRAM_API_KEY and OPENAI_API_KEY must be set"; \
 		echo "  export DEEPGRAM_API_KEY='your-key-here'"; \
 		echo "  export OPENAI_API_KEY='your-key-here'"; \
 		exit 1; \
 	fi
+	@$(MAKE) containers-stop
+	@TEST_CONFIG_FILE=/app/test-configs/deepgram-openai.yml $(MAKE) containers-start
 	CREATE_FIXTURE=true uv run --with-requirements test-requirements.txt robot --outputdir $(OUTPUTDIR) \
 		--name "API Key Tests" \
 		--console verbose \
 		--loglevel INFO:INFO \
 		--include requires-api-keys \
 		$(TEST_DIR)
```
🧹 Nitpick comments (21)
backends/advanced/src/advanced_omi_backend/services/transcription/mock_provider.py (1)
54-56: Unused `audio_duration` variable.

The `audio_duration` is calculated but never used. Consider either removing it or using it to dynamically scale word timestamps based on actual audio length.

🧹 Suggested fix
```diff
-        # Calculate audio duration from bytes (assuming 16-bit PCM)
-        audio_duration = len(audio_data) / (sample_rate * 2)  # 2 bytes per sample
-
         # Return a mock transcript with word-level timestamps
```

Alternatively, if you want dynamic timing based on audio duration:
```diff
         # Calculate audio duration from bytes (assuming 16-bit PCM)
         audio_duration = len(audio_data) / (sample_rate * 2)  # 2 bytes per sample
+        # Scale factor to spread words across actual audio duration
+        scale = audio_duration / 6.5 if audio_duration > 0 else 1.0
```

backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py (1)
73-77: Hardcoded `fail_mode=False` limits test flexibility.

The mock provider path always creates `MockTranscriptionProvider(fail_mode=False)`, which means tests using `RegistryBatchTranscriptionProvider` with `model_provider="mock"` cannot trigger failure scenarios. The `fail_mode` parameter is only accessible through `get_mock_transcription_provider()`. If failure testing through the registry path is needed, consider reading `fail_mode` from configuration or environment.

💡 Optional: Support fail_mode through config or environment
```diff
         # Special handling for mock provider (no HTTP server needed)
         if self.model.model_provider == "mock":
             from .mock_provider import MockTranscriptionProvider

-            mock = MockTranscriptionProvider(fail_mode=False)
+            # Allow fail_mode to be configured for testing scenarios
+            import os
+            fail_mode = os.getenv("MOCK_TRANSCRIPTION_FAIL_MODE", "false").lower() == "true"
+            mock = MockTranscriptionProvider(fail_mode=fail_mode)
             return await mock.transcribe(audio_data, sample_rate, diarize)
```

backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py (3)
844-888: Improve exception handling in streaming validation.

Several exception handling issues:

- Line 873: `raise ValueError(error_msg)` inside the try block is caught by the except on line 877
- Line 878: Use `logging.exception` instead of `logging.error` to capture the stack trace
- Line 880: Missing `from error` or `from None` to distinguish from exception handling errors

The control flow of catching `ValueError` only to re-raise it is convoluted.

♻️ Suggested refactor for cleaner error handling
```diff
             await websocket.send_json(error_response)
             application_logger.info(f"📤 Sent streaming error to WebUI client {client_id}")

             # Close the websocket connection after sending error
             await websocket.close(code=1008, reason="Streaming transcription not configured")
             application_logger.info(f"🔌 Closed WebSocket connection for {client_id} due to streaming config error")

-            # Raise ValueError to exit the handler completely
-            raise ValueError(error_msg)
-        except ValueError:
-            # Re-raise ValueError to exit handler
-            raise
+            # Return with error indication to exit handler
+            raise ValueError(error_msg) from None
         except Exception as e:
-            application_logger.error(f"Failed to send error to client: {e}")
-            # Still raise ValueError to exit handler
-            raise ValueError(error_msg)
+            application_logger.exception(f"Failed to send error to client: {e}")
+            raise ValueError(error_msg) from e
```
937-958: Unused parameter `user_email` and f-string without placeholders.

The `user_email` parameter is never used in `_process_rolling_batch`. Additionally, line 957 has an f-string without placeholders.

🧹 Suggested fix
```diff
 async def _process_rolling_batch(
     client_state,
     user_id: str,
-    user_email: str,
     client_id: str,
     batch_number: int
 ) -> None:
     """
     Process accumulated batch audio as a rolling segment.

     Creates conversation titled "Recording Part {batch_number}" and enqueues transcription.

     Args:
         client_state: Client state with batch_audio_chunks
         user_id: User ID
-        user_email: User email
         client_id: Client ID
         batch_number: Sequential batch number (1, 2, 3...)
     """
     if not hasattr(client_state, 'batch_audio_chunks') or not client_state.batch_audio_chunks:
-        application_logger.warning(f"⚠️ No audio chunks to process for rolling batch")
+        application_logger.warning("⚠️ No audio chunks to process for rolling batch")
         return
```

And update the call site at line 754-760:
```diff
     await _process_rolling_batch(
         client_state,
         user_id=client_state.user_id,
-        user_email=client_state.user_email,
         client_id=client_state.client_id,
         batch_number=client_state.batch_chunks_processed + 1
     )
```
986-994: Unused `num_chunks` variable.

The result of `convert_audio_to_chunks` is assigned but never used.

🧹 Suggested fix
```diff
     # Convert to MongoDB chunks
-    num_chunks = await convert_audio_to_chunks(
+    await convert_audio_to_chunks(
         conversation_id=conversation_id,
         audio_data=complete_audio,
         sample_rate=sample_rate,
         channels=channels,
         sample_width=width
     )
```

backends/advanced/docker-compose-test.yml (2)
141-141: Consider making CUDA version overrideable for non‑GPU setups.
An env-level override keeps the profile usable on machines without compatible drivers.

♻️ Possible tweak
```diff
       args:
-        PYTORCH_CUDA_VERSION: cu12.6
+        PYTORCH_CUDA_VERSION: ${PYTORCH_CUDA_VERSION:-cu12.6}
```
168-193: Optional: gate mock services behind a profile.
If these mocks are only needed for specific suites, a profile can keep default runs lighter.

♻️ Example
```diff
   mock-streaming-stt:
     build:
       context: ../..
       dockerfile: tests/Dockerfile.mock-streaming-stt
@@
     restart: unless-stopped
+    profiles:
+      - mocks
@@
   mock-llm:
     build:
       context: ../..
       dockerfile: tests/Dockerfile.mock-llm
@@
     restart: unless-stopped
+    profiles:
+      - mocks
```

tests/resources/system_keywords.robot (1)
44-81: Avoid fixed sleep for mock server readiness.
A hard `Sleep 2s` can be flaky or wasteful; prefer polling for port readiness or a log line before proceeding.
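Instead of a hard sleep, the setup could poll until the mock server's port accepts connections. A minimal Python sketch of that pattern (the function name is illustrative, not part of this test suite):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 10.0, interval: float = 0.2) -> bool:
    """Poll until a TCP port accepts connections, or the timeout elapses.

    Returns True as soon as a connection succeeds, False if the deadline
    passes without one. Replaces a fixed sleep with a bounded readiness check.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

In Robot Framework the same effect is usually achieved by wrapping a connect keyword in `Wait Until Keyword Succeeds` rather than sleeping.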
66-77: Minor: Verify the env_var display renders correctly.

The template literal `${fieldSchema.env_var}` on line 69 will render as the literal string (e.g., `$DEEPGRAM_API_KEY`). If the intent is to show just the variable name without the `$` prefix, consider removing it. If displaying the shell-style variable reference is intentional, this is fine as-is.

tests/resources/conversation_keywords.robot (1)
36-48: Naming: Consider renaming to `Validate Conversation Count By Client ID`.

The keyword name suggests it waits/polls for a conversation, but it actually performs a single check and asserts. The actual waiting is delegated to `Wait Until Keyword Succeeds` in the calling tests. While this pattern works, the name could be clearer:

- Current: `Wait For Conversation By Client ID` (implies polling)
- Suggested: `Validate Conversation Count By Client ID` or `Assert Conversation Exists By Client ID`

This is a minor documentation concern since the docstring does mention "Polls until..." but the keyword itself doesn't implement polling.
tests/integration/always_persist_audio_tests.robot (1)
41-45: Consider: Fixed sleep in Test Cleanup may cause flakiness or slow tests.

The 2-second sleep allows the backend to finalize processing, but this could be either too short (causing flakiness if processing takes longer) or unnecessarily slow. Consider if there's a status to poll for, or if this delay is acceptable given test reliability requirements.
tests/libs/mock_streaming_stt_server.py (2)
152-153: Use `logging.exception` for better error context.

When catching exceptions, `logging.exception` automatically includes the stack trace, providing better debugging information.

Proposed fix
```diff
             except json.JSONDecodeError:
-                logger.error(f"Invalid JSON from {client_id}: {message}")
+                logger.exception(f"Invalid JSON from {client_id}: {message}")
```
167-169: Remove extraneous f-string prefixes.

These strings have no placeholders, so the `f` prefix is unnecessary.

Proposed fix
```diff
     logger.info(f"Starting Mock Streaming STT Server on {host}:{port}")
-    logger.info(f"Deepgram-compatible nested response format")
-    logger.info(f"Speech detection: >2.0s duration, >5 words")
+    logger.info("Deepgram-compatible nested response format")
+    logger.info("Speech detection: >2.0s duration, >5 words")
```

backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py (2)
155-159: Remove debug print statements.

These `print()` statements duplicate the `logger.info()` calls and appear to be debug artifacts. Using only the logger is more appropriate for production code as it respects log levels and configuration.

Proposed fix
```diff
-        print(f"🔵 CLIENT: Sending audio-start message: {header}")
         logger.info(f"🔵 CLIENT: Sending audio-start message: {header}")
         await self.ws.send(json.dumps(header) + "\n")
-        print(f"✅ CLIENT: Sent audio-start with mode={recording_mode}, always_persist={always_persist}")
         logger.info(f"✅ CLIENT: Sent audio-start with mode={recording_mode}, always_persist={always_persist}")
```
464-466: Use `logging.exception` for better error context.

When logging in an exception handler, `logging.exception` automatically includes the stack trace, which aids debugging.

Proposed fix
```diff
         except Exception as e:
             session.error = str(e)
-            logger.error(f"❌ CLIENT: Stream {stream_id} failed to start: {e}")
+            logger.exception(f"❌ CLIENT: Stream {stream_id} failed to start: {e}")
```

backends/advanced/src/advanced_omi_backend/utils/logging_utils.py (2)
94-97: Consider masking empty string secrets.

The condition `if is_secret and value` won't mask falsy values like empty strings (`""`), `0`, or `False`. While this might be intentional (no need to mask empty values), it could leak information that a secret field is empty vs having a value.

If you want to mask all secret fields regardless of value:
♻️ Suggested change
```diff
-        if is_secret and value:
+        if is_secret and value is not None:
```
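The masking predicate above can be made both None-aware and case-insensitive in one small helper; a sketch (hypothetical function, not the module's actual API):

```python
def mask_value(key: str, value, secret_fields: set, mask: str = "***"):
    """Mask a value when its key names a secret, matching case-insensitively.

    Any non-None value is masked, so even an empty-string secret is hidden;
    None passes through, signalling "not set" without leaking anything.
    """
    secret_lower = {f.lower() for f in secret_fields}
    if key.lower() in secret_lower and value is not None:
        return mask
    return value
```

Centralizing the rule this way also resolves the case-sensitivity mismatch between the dict and repr paths.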
254-256: Inconsistent case sensitivity for secret attribute matching.

`create_masked_repr` uses case-sensitive matching (`key in secret_attrs`), while `mask_dict` uses case-insensitive matching (`key.lower() in secret_fields_lower`). This inconsistency could lead to secrets not being masked if the casing doesn't match exactly.

♻️ Suggested fix for consistency
```diff
+    secret_attrs_lower = {attr.lower() for attr in secret_attrs}
+
     for key in dir(obj):
         # Skip private/magic attributes and methods
         if key.startswith('_') or callable(getattr(obj, key)):
             continue

         value = getattr(obj, key)

         # Mask secret attributes
-        if key in secret_attrs:
+        if key.lower() in secret_attrs_lower:
             value_repr = f"'{mask}'"
```

backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/email_service.py (1)
173-183: Add exception chaining to preserve stack trace.

The detailed error messages are helpful, but without exception chaining (`from e`), the original traceback is lost, making debugging harder. This is flagged by Ruff B904.

♻️ Suggested fix
```diff
         except smtplib.SMTPAuthenticationError as e:
-            raise Exception(f"SMTP Authentication failed for {self.username}. Check credentials. For Gmail, use an App Password instead of your regular password. Error: {str(e)}")
+            raise Exception(f"SMTP Authentication failed for {self.username}. Check credentials. For Gmail, use an App Password instead of your regular password. Error: {e!s}") from e
         except smtplib.SMTPConnectError as e:
-            raise Exception(f"Failed to connect to SMTP server {self.host}:{self.port}. Check host and port. Error: {str(e)}")
+            raise Exception(f"Failed to connect to SMTP server {self.host}:{self.port}. Check host and port. Error: {e!s}") from e
         except smtplib.SMTPServerDisconnected as e:
-            raise Exception(f"SMTP server disconnected unexpectedly. Check TLS settings (port 587 needs TLS, port 465 needs SSL). Error: {str(e)}")
+            raise Exception(f"SMTP server disconnected unexpectedly. Check TLS settings (port 587 needs TLS, port 465 needs SSL). Error: {e!s}") from e
         except TimeoutError as e:
-            raise Exception(f"Connection to {self.host}:{self.port} timed out. Check firewall/network settings. Error: {str(e)}")
+            raise Exception(f"Connection to {self.host}:{self.port} timed out. Check firewall/network settings. Error: {e!s}") from e
         except Exception as e:
-            raise Exception(f"SMTP connection test failed: {type(e).__name__}: {str(e)}")
+            raise Exception(f"SMTP connection test failed: {type(e).__name__}: {e!s}") from e
```

tests/libs/mock_llm_server.py (2)
302-308: Remove extraneous f-string prefixes.

Lines 303-308 use f-strings but contain no placeholders. Per Ruff F541, these should be plain strings.

♻️ Suggested fix
```diff
     logger.info(f"Starting Mock LLM Server on {host}:{port}")
-    logger.info(f"OpenAI-compatible endpoints:")
-    logger.info(f"  - POST /v1/chat/completions")
-    logger.info(f"  - POST /v1/embeddings")
-    logger.info(f"  - GET /v1/models")
-    logger.info(f"  - GET /health")
-    logger.info(f"Deterministic embeddings: 1536 dimensions")
+    logger.info("OpenAI-compatible endpoints:")
+    logger.info("  - POST /v1/chat/completions")
+    logger.info("  - POST /v1/embeddings")
+    logger.info("  - GET /v1/models")
+    logger.info("  - GET /health")
+    logger.info("Deterministic embeddings: 1536 dimensions")
```
256-282: Consider prefixing unused `request` parameter.

The `request` parameter is required by aiohttp's handler signature but unused in `handle_models` and `handle_health`. Prefixing with `_` indicates intentional non-use and silences the linter.

♻️ Suggested fix
```diff
-async def handle_models(request: web.Request) -> web.Response:
+async def handle_models(_request: web.Request) -> web.Response:
     """Handle /v1/models endpoint."""
     ...
-async def handle_health(request: web.Request) -> web.Response:
+async def handle_health(_request: web.Request) -> web.Response:
     """Handle health check endpoint."""
     ...
```

backends/advanced/src/advanced_omi_backend/config.py (1)
207-231: Avoid partial writes when saving misc settings.

If one section saves and the other fails, the call returns `False` but state is already partially updated. Consider a single atomic save or returning per-section status/rollback details.
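One way to make the save atomic is the classic write-to-temp-then-rename pattern; a sketch under the assumption that both sections serialize to a single JSON file (the function and file layout are hypothetical, not the actual `config.py` API):

```python
import json
import os
import tempfile

def save_settings_atomic(path: str, settings: dict) -> None:
    """Write all sections in one step: serialize to a temp file in the same
    directory, then rename over the target. Readers never observe a partially
    written file, and a failure leaves the previous file untouched.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f, indent=2)
        os.replace(tmp_path, path)  # atomic rename on POSIX filesystems
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```

Because `os.replace` is atomic on the same filesystem, either both sections land or neither does.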
```python
async def initialize_openmemory_user() -> None:
    """Initialize and register OpenMemory user if using OpenMemory MCP provider.

    This function:
    - Checks if OpenMemory MCP is configured as the memory provider
    - Registers the configured user with OpenMemory server
    - Creates a test memory and deletes it to trigger user creation
    - Logs success or warning if OpenMemory is not reachable
    """
    from advanced_omi_backend.services.memory.config import build_memory_config_from_env, MemoryProvider

    memory_provider_config = build_memory_config_from_env()

    if memory_provider_config.memory_provider != MemoryProvider.OPENMEMORY_MCP:
        return

    try:
        from advanced_omi_backend.services.memory.providers.mcp_client import MCPClient

        # Get configured user_id and server_url
        openmemory_config = memory_provider_config.openmemory_config
        user_id = openmemory_config.get("user_id", "openmemory") if openmemory_config else "openmemory"
        server_url = openmemory_config.get("server_url", "http://host.docker.internal:8765") if openmemory_config else "http://host.docker.internal:8765"
        client_name = openmemory_config.get("client_name", "chronicle") if openmemory_config else "chronicle"

        application_logger.info(f"Registering OpenMemory user: {user_id} at {server_url}")

        # Make a lightweight registration call (create and delete dummy memory)
        async with MCPClient(server_url=server_url, client_name=client_name, user_id=user_id) as client:
            # Test connection first
            is_connected = await client.test_connection()
            if is_connected:
                # Create and immediately delete a dummy memory to trigger user creation
                memory_ids = await client.add_memories("Chronicle initialization - user registration test")
                if memory_ids:
                    # Delete the test memory
                    await client.delete_memory(memory_ids[0])
                application_logger.info(f"✅ Registered OpenMemory user: {user_id}")
            else:
                application_logger.warning(f"⚠️ OpenMemory MCP not reachable at {server_url}")
                application_logger.info("User will be auto-created on first memory operation")
    except Exception as e:
        application_logger.warning(f"⚠️ Could not register OpenMemory user: {e}")
        application_logger.info("User will be auto-created on first memory operation")
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Check MCPClient for timeout and test_connection behavior
rg -n "class MCPClient|def test_connection|timeout" backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py -A 5
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 1689

🏁 Script executed:

```shell
# Check where initialize_openmemory_user is called
rg -n "initialize_openmemory_user" backends/advanced/src/advanced_omi_backend/app_factory.py -B 5 -A 5
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 1052

🏁 Script executed:

```shell
# Check if there's any background task or await pattern
grep -n "await initialize_openmemory_user\|asyncio.create_task\|asyncio.ensure_future" backends/advanced/src/advanced_omi_backend/app_factory.py
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 114

🏁 Script executed:

```shell
# Get test_connection full implementation
rg -n "async def test_connection" backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py -A 15
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 851

🏁 Script executed:

```shell
# Check if MCPClient calls are sequential or concurrent
rg -n "test_connection\|add_memories\|delete_memory" backends/advanced/src/advanced_omi_backend/app_factory.py -B 2 -A 2
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 54

🏁 Script executed:

```shell
# Get full test_connection implementation
rg -n "async def test_connection" backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py -A 25
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 1283

🏁 Script executed:

```shell
# Check the initialization sequence in app_factory
sed -n '45,95p' backends/advanced/src/advanced_omi_backend/app_factory.py
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 2689
Move OpenMemory user registration to a background task to prevent startup blocking.

This function is directly awaited during startup and makes sequential network calls with 30-second timeouts (up to 90 seconds if OpenMemory is unreachable). Use `asyncio.create_task()` or equivalent to defer this initialization, or implement a shorter timeout. This contradicts the stated intent in the preceding comment about avoiding startup blocking.
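The suggested pattern can be sketched generically: bound the coroutine with `asyncio.wait_for` and schedule it via `create_task` so startup returns immediately (helper names are illustrative, not the actual app_factory code):

```python
import asyncio

async def _bounded(coro_factory, timeout: float) -> None:
    """Run a startup dependency check with a hard time limit. A timeout is
    deliberately swallowed here, since the user is auto-created later anyway."""
    try:
        await asyncio.wait_for(coro_factory(), timeout=timeout)
    except asyncio.TimeoutError:
        pass  # dependency unreachable; continue startup regardless

def schedule_startup_task(coro_factory, timeout: float = 5.0) -> asyncio.Task:
    """Schedule the check in the background; the caller's startup proceeds
    immediately instead of blocking on network calls."""
    return asyncio.create_task(_bounded(coro_factory, timeout))
```

With this shape, `initialize_openmemory_user` itself stays unchanged and only its invocation moves.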
🧰 Tools
🪛 Ruff (0.14.14)
86-86: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/app_factory.py` around lines 45 -
88, The initialize_openmemory_user function performs blocking network operations
during startup; change its invocation so it runs in the background instead of
being awaited synchronously: locate the startup code that calls
initialize_openmemory_user and replace the direct await with scheduling it via
asyncio.create_task(initialize_openmemory_user()) (or use loop.create_task) so
the app doesn't block; alternatively, if you prefer time-bounding, add a short
per-call timeout around MCPClient.test_connection / add_memories (e.g.,
asyncio.wait_for) inside initialize_openmemory_user to limit how long MCPClient
(used in the async with block and methods test_connection, add_memories,
delete_memory) can hang. Ensure you keep build_memory_config_from_env and
MemoryProvider checks unchanged and preserve logging behavior when running as a
background task.
```python
async def test_plugin_connection(plugin_id: str, config: dict) -> dict:
    """Test plugin connection/configuration without saving.

    Calls the plugin's test_connection method if available to validate
    configuration (e.g., SMTP connection, Home Assistant API).

    Args:
        plugin_id: Plugin identifier
        config: Configuration to test (same structure as update_plugin_config_structured)

    Returns:
        Test result with success status and details
    """
    try:
        from advanced_omi_backend.services.plugin_service import discover_plugins, expand_env_vars

        # Validate plugin exists
        discovered_plugins = discover_plugins()
        if plugin_id not in discovered_plugins:
            raise ValueError(f"Plugin '{plugin_id}' not found")

        plugin_class = discovered_plugins[plugin_id]

        # Check if plugin supports testing
        if not hasattr(plugin_class, 'test_connection'):
            return {
                "success": False,
                "message": f"Plugin '{plugin_id}' does not support connection testing",
                "status": "unsupported"
            }

        # Build complete config from provided data
        test_config = {}

        # Merge settings
        if 'settings' in config:
            test_config.update(config['settings'])

        # Add env vars (expand any ${ENV_VAR} references with test values)
        if 'env_vars' in config:
            for key, value in config['env_vars'].items():
                # Skip masked values
                if value == '••••••••••••':
                    # Use actual env var value
                    value = os.getenv(key, '')
                test_config[key.lower()] = value

        # Expand any remaining env var references
        test_config = expand_env_vars(test_config)

        # Call plugin's test_connection static method
        result = await plugin_class.test_connection(test_config)

        logger.info(f"Test connection for '{plugin_id}': {result.get('message', 'No message')}")

        return result

    except Exception as e:
        logger.exception(f"Error testing connection for plugin '{plugin_id}'")
        return {
            "success": False,
            "message": f"Connection test failed: {str(e)}",
            "status": "error"
        }
```
Provided env vars won’t actually populate settings during test.
`env_vars` are only lower-cased into `test_config`, so `${ENV_VAR}` placeholders in settings won't resolve unless the env var is already in the process environment. That can cause false negatives for newly supplied secrets (e.g., `HA_TOKEN` → `home_assistant_token` mismatch). Consider expanding settings using the provided env values instead.
🛠️ Proposed fix (expand placeholders with provided env vars)
- # Build complete config from provided data
- test_config = {}
-
- # Merge settings
- if 'settings' in config:
- test_config.update(config['settings'])
-
- # Add env vars (expand any ${ENV_VAR} references with test values)
- if 'env_vars' in config:
- for key, value in config['env_vars'].items():
- # Skip masked values
- if value == '••••••••••••':
- # Use actual env var value
- value = os.getenv(key, '')
- test_config[key.lower()] = value
-
- # Expand any remaining env var references
- test_config = expand_env_vars(test_config)
+ # Build complete config from provided data
+ test_config = {}
+
+ # Merge settings
+ if 'settings' in config:
+ test_config.update(config['settings'])
+
+ # Expand settings using provided env var overrides (without persisting them)
+ original_env = {}
+ for key, value in config.get('env_vars', {}).items():
+ if value == '••••••••••••':
+ continue
+ original_env[key] = os.getenv(key)
+ os.environ[key] = value
+ try:
+ test_config = expand_env_vars(test_config)
+ finally:
+ for key, old in original_env.items():
+ if old is None:
+ os.environ.pop(key, None)
+ else:
+ os.environ[key] = old🧰 Tools
🪛 Ruff (0.14.14)
1262-1262: Abstract raise to an inner function
(TRY301)
1262-1262: Avoid specifying long messages outside the exception class
(TRY003)
1298-1298: Consider moving this statement to an else block
(TRY300)
1304-1304: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/controllers/system_controller.py`
around lines 1243 - 1306, In test_plugin_connection, env vars from the passed-in
config are only added to test_config separately and not used to expand
placeholders inside settings; update test_plugin_connection so you first build
test_config from config['settings'], then merge provided env_vars (handling
masked values by reading os.getenv when value == '••••••••••••' and lower-casing
keys) into that settings dict before calling expand_env_vars(test_config), so
any ${ENV_VAR} placeholders in settings resolve using the supplied env_vars;
keep using plugin_class.test_connection for the call and preserve
logging/exception behavior.
```python
        # Create temporary MCP client
        mcp_client = HAMCPClient(
            base_url=ha_url,
            token=ha_token,
            timeout=timeout
        )

        # Test API connectivity with Template API
        logger.info(f"Testing Home Assistant API connection to {ha_url}...")
        start_time = time.time()

        test_result = await mcp_client._render_template("{{ 1 + 1 }}")
        connection_time_ms = int((time.time() - start_time) * 1000)

        if str(test_result).strip() != "2":
            return {
                "success": False,
                "message": f"Unexpected template result: {test_result}",
                "status": "error"
            }

        # Try to fetch entities count for additional info
        try:
            entities = await mcp_client.get_all_entities()
            entity_count = len(entities)
        except Exception:
            entity_count = None

        return {
            "success": True,
            "message": f"Successfully connected to Home Assistant at {ha_url}",
            "status": "success",
            "details": {
                "ha_url": ha_url,
                "connection_time_ms": connection_time_ms,
                "entity_count": entity_count,
                "api_test": "Template rendering successful"
            }
        }

    except Exception as e:
        logger.error(f"Home Assistant connection test failed: {e}", exc_info=True)
        return {
            "success": False,
            "message": f"Connection test failed: {str(e)}",
            "status": "error"
        }
```
Missing resource cleanup: MCP client not closed after test.
The `HAMCPClient` creates an internal `httpx.AsyncClient` that should be closed after use. The `test_connection` method creates a client but never calls `await mcp_client.close()`, which could lead to resource leaks if this method is called repeatedly.
♻️ Suggested fix using try/finally
# Create temporary MCP client
mcp_client = HAMCPClient(
base_url=ha_url,
token=ha_token,
timeout=timeout
)
- # Test API connectivity with Template API
- logger.info(f"Testing Home Assistant API connection to {ha_url}...")
- start_time = time.time()
-
- test_result = await mcp_client._render_template("{{ 1 + 1 }}")
- connection_time_ms = int((time.time() - start_time) * 1000)
-
- if str(test_result).strip() != "2":
- return {
- "success": False,
- "message": f"Unexpected template result: {test_result}",
- "status": "error"
- }
-
- # Try to fetch entities count for additional info
try:
- entities = await mcp_client.get_all_entities()
- entity_count = len(entities)
- except Exception:
- entity_count = None
-
- return {
- "success": True,
- "message": f"Successfully connected to Home Assistant at {ha_url}",
- "status": "success",
- "details": {
- "ha_url": ha_url,
- "connection_time_ms": connection_time_ms,
- "entity_count": entity_count,
- "api_test": "Template rendering successful"
+ # Test API connectivity with Template API
+ logger.info(f"Testing Home Assistant API connection to {ha_url}...")
+ start_time = time.time()
+
+ test_result = await mcp_client._render_template("{{ 1 + 1 }}")
+ connection_time_ms = int((time.time() - start_time) * 1000)
+
+ if str(test_result).strip() != "2":
+ return {
+ "success": False,
+ "message": f"Unexpected template result: {test_result}",
+ "status": "error"
+ }
+
+ # Try to fetch entities count for additional info
+ try:
+ entities = await mcp_client.fetch_entity_states()
+ entity_count = len(entities)
+ except Exception:
+ entity_count = None
+
+ return {
+ "success": True,
+ "message": f"Successfully connected to Home Assistant at {ha_url}",
+ "status": "success",
+ "details": {
+ "ha_url": ha_url,
+ "connection_time_ms": connection_time_ms,
+ "entity_count": entity_count,
+ "api_test": "Template rendering successful"
+ }
}
- }
+ finally:
+ await mcp_client.close()🧰 Tools
🪛 Ruff (0.14.14)
671-671: Do not catch blind exception: Exception
(BLE001)
674-684: Consider moving this statement to an else block
(TRY300)
690-690: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py`
around lines 646 - 692, The test_connection block creates a HAMCPClient
(mcp_client) but never closes its internal httpx.AsyncClient; wrap the usage of
mcp_client in a try/finally (or async with) around the calls to _render_template
and get_all_entities, and ensure you call await mcp_client.close() in the
finally so the client is always closed even on errors; update the function
(test_connection) to reference mcp_client, _render_template, get_all_entities,
and close as the cleanup point.
```python
        # Try to fetch entities count for additional info
        try:
            entities = await mcp_client.get_all_entities()
            entity_count = len(entities)
        except Exception:
            entity_count = None
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Search for get_all_entities method definition
rg -n "def get_all_entities|async def get_all_entities" --type py
# Also search for where HAMCPClient is defined to understand its structure
rg -n "class HAMCPClient" --type py -A 20
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 2448

🏁 Script executed:

```shell
#!/bin/bash
# Get the full HAMCPClient class definition to see all available methods
rg -n "class HAMCPClient" --type py -A 200 backends/advanced/src/advanced_omi_backend/plugins/homeassistant/mcp_client.py | head -250
# Also specifically search for any method containing "entities"
rg -n "def.*entities|async def.*entities" --type py
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 7781

🏁 Script executed:

```shell
#!/bin/bash
# Get full view of HAMCPClient methods, especially discover_entities
rg -n "async def discover_entities|async def fetch_area_entities" --type py -A 10 backends/advanced/src/advanced_omi_backend/plugins/homeassistant/mcp_client.py
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 764
Fix method call on line 669: `get_all_entities()` does not exist.

`mcp_client.get_all_entities()` will fail at runtime; the method doesn't exist on `HAMCPClient`. The exception handler silently catches the `AttributeError` and sets `entity_count` to `None`, making this dead code.

Replace with `discover_entities()`, which returns a dictionary of available entities:
♻️ Suggested fix
try:
- entities = await mcp_client.get_all_entities()
+ entities = await mcp_client.discover_entities()
entity_count = len(entities)
except Exception:
entity_count = None🧰 Tools
🪛 Ruff (0.14.14)
671-671: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py`
around lines 667 - 672, The code calls a non-existent method
mcp_client.get_all_entities() which will raise at runtime; change the call to
mcp_client.discover_entities() and compute entity_count from the returned
dictionary (e.g., entity_count = len(entities_dict)) while keeping the existing
try/except behavior; update references to the symbols mcp_client,
get_all_entities(), discover_entities(), and entity_count so the plugin uses
discover_entities()'s dict result instead of the removed API.
```python
    response = {
        "job_id": job.id,
        "status": status
    }

    # Include error information for failed jobs
    if status == "failed" and job.exc_info:
        response["error_message"] = str(job.exc_info)
        response["exc_info"] = str(job.exc_info)
```
Avoid exposing full stack traces to non‑admins.
exc_info can leak internal paths or secrets. Consider returning a short error summary for regular users and reserving full traces for admins.
🔒 Safer response pattern
```diff
     response = {
         "job_id": job.id,
         "status": status
     }

     # Include error information for failed jobs
     if status == "failed" and job.exc_info:
-        response["error_message"] = str(job.exc_info)
-        response["exc_info"] = str(job.exc_info)
+        error_text = str(job.exc_info)
+        # Provide a short summary to non-admins
+        response["error_message"] = (error_text.splitlines()[-1] if error_text else "Job failed")[:500]
+        # Full trace only for admins
+        if current_user.is_superuser:
+            response["exc_info"] = error_text
```
+ response["exc_info"] = error_text🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py`
around lines 75 - 84, The current response exposes job.exc_info (full stack
trace) when status == "failed"; replace this with a short safe summary (e.g.,
"error_summary" containing the exception message or the first line of
job.exc_info) for regular users and only include the full exc_info when the
requester is an admin—check a known admin flag (e.g., current_user.is_admin or
request.user.is_admin) before adding "exc_info" to the response; update the
block that builds response (where response = {"job_id": job.id, "status":
status} and the following failed-handling code) to produce a sanitized
"error_message"/"error_summary" and conditionally add full "exc_info" for admins
only.
```tsx
<label
  className="flex items-center space-x-2 cursor-pointer"
  onClick={(e) => {
    e.stopPropagation()
    onToggleEnabled(plugin.plugin_id, !plugin.enabled)
  }}
>
  <span className="text-xs text-gray-600 dark:text-gray-400">
    {plugin.enabled ? 'Enabled' : 'Disabled'}
  </span>
  <div
    className={`
      relative inline-flex h-5 w-9 items-center rounded-full transition-colors
      ${plugin.enabled ? 'bg-blue-600' : 'bg-gray-300 dark:bg-gray-600'}
    `}
  >
    <span
      className={`
        inline-block h-4 w-4 transform rounded-full bg-white transition-transform
        ${plugin.enabled ? 'translate-x-5' : 'translate-x-0.5'}
      `}
    />
  </div>
</label>
```
Use a real, focusable toggle control.
The current toggle is a `<label>` with an `onClick` handler but no associated control, so it isn't focusable or announced correctly to assistive tech.
♿ Proposed fix
- <label
- className="flex items-center space-x-2 cursor-pointer"
- onClick={(e) => {
- e.stopPropagation()
- onToggleEnabled(plugin.plugin_id, !plugin.enabled)
- }}
- >
+ <button
+ type="button"
+ role="switch"
+ aria-checked={plugin.enabled}
+ className="flex items-center space-x-2 cursor-pointer"
+ onClick={(e) => {
+ e.stopPropagation()
+ onToggleEnabled(plugin.plugin_id, !plugin.enabled)
+ }}
+ >
<span className="text-xs text-gray-600 dark:text-gray-400">
{plugin.enabled ? 'Enabled' : 'Disabled'}
</span>
<div
className={`
relative inline-flex h-5 w-9 items-center rounded-full transition-colors
${plugin.enabled ? 'bg-blue-600' : 'bg-gray-300 dark:bg-gray-600'}
`}
>
<span
className={`
inline-block h-4 w-4 transform rounded-full bg-white transition-transform
${plugin.enabled ? 'translate-x-5' : 'translate-x-0.5'}
`}
/>
</div>
- </label>
+ </button>
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/components/plugins/PluginListSidebar.tsx` around
lines 132 - 155, The toggle in PluginListSidebar is implemented as a
non-focusable <label> with an onClick, so screen readers and keyboard users
cannot properly interact with it; replace it with a real focusable control
(e.g., a button or an input[type="checkbox"]) that calls
onToggleEnabled(plugin.plugin_id, !plugin.enabled), exposes aria-checked (or
checked) and role if needed, includes accessible text/aria-label reflecting
Enabled/Disabled, and keeps the same visual classes so behavior and styling
remain intact; update any onClick handlers on the wrapper to use
onKeyDown/keyboard activation patterns if you keep a custom element.
// Extract current configuration from plugin metadata
const config: PluginConfig = {
  orchestration: {
    enabled: plugin.enabled || false,
    events: [],
    condition: { type: 'always' }
  },
  settings: {},
  env_vars: {}
}

// Load settings with defaults
Object.keys(plugin.config_schema.settings || {}).forEach((key) => {
  const schema = plugin.config_schema.settings[key]
  config.settings[key] = schema.default ?? ''
})

// Load env vars (will be masked values from backend)
Object.keys(plugin.config_schema.env_vars || {}).forEach((key) => {
  const schema = plugin.config_schema.env_vars[key]
  config.env_vars[key] = schema.value ?? ''
})
Avoid overwriting existing plugin configuration with defaults on load.
Lines 99-113 initialize orchestration/events/condition and settings from defaults only. If the metadata already includes current values, saving from this UI will reset user changes. Prefer hydrating from existing values (e.g., schema.value or the stored orchestration values), or fetch the current config before editing.
🛠️ Suggested adjustment
- const config: PluginConfig = {
- orchestration: {
- enabled: plugin.enabled || false,
- events: [],
- condition: { type: 'always' }
- },
- settings: {},
- env_vars: {}
- }
+ const orchestration = plugin.config_schema.orchestration || {}
+ const config: PluginConfig = {
+ orchestration: {
+ enabled: plugin.enabled || false,
+ events: orchestration.events ?? [],
+ condition: orchestration.condition ?? { type: 'always' }
+ },
+ settings: {},
+ env_vars: {}
+ }
...
- config.settings[key] = schema.default ?? ''
+ config.settings[key] = schema.value ?? schema.default ?? ''
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/components/PluginSettingsForm.tsx` around lines
98 - 119, The current initialization in PluginSettingsForm creates a fresh
config that overwrites any existing plugin configuration; instead hydrate from
existing values: initialize orchestration using plugin.orchestration or
plugin.config?.orchestration (preserving enabled, events, condition) and for
settings/env_vars populate each key by first checking
plugin.config?.settings[key] (or plugin.config?.env_vars[key]) then falling back
to schema.value or schema.default; ensure you only use defaults when no current
value exists and keep existing events/condition intact (refer to the config
object, orchestration, settings, env_vars, and plugin.config_schema to locate
fields).
try {
  const response = await systemApi.testPluginConnection(selectedPluginId, {
    orchestration: currentConfig.orchestration,
    settings: currentConfig.settings,
    env_vars: currentConfig.env_vars
  })
🧩 Analysis chain
🏁 Script executed:
fd -t f "PluginSettingsForm.tsx" backends/advanced/webui/src/components/
🏁 Script executed:
cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | sed -n '160,180p'
🏁 Script executed:
cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | sed -n '1,50p'
🏁 Script executed:
wc -l "backends/advanced/webui/src/components/PluginSettingsForm.tsx"
🏁 Script executed:
cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | grep -i -E "save|mask|••••" -A 3 -B 3
🏁 Script executed:
cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | sed -n '100,200p'
Filter masked secrets before connection tests.
The handleTestConnection method (lines 166-171) sends env_vars without filtering masked values, but the handleSave method already implements the correct pattern (lines 200-208) that skips sending unchanged secrets. When env vars contain masked values (••••), sending them verbatim to the backend will cause the connection test to fail unexpectedly. Reuse the same filtering logic from handleSave:
🛠️ Suggested adjustment
try {
+ const envVarsToSend: Record<string, any> = {}
+ Object.keys(currentConfig.env_vars).forEach((key) => {
+ const value = currentConfig.env_vars[key]
+ if (typeof value !== 'string' || !value.includes('••••')) {
+ envVarsToSend[key] = value
+ }
+ })
+
const response = await systemApi.testPluginConnection(selectedPluginId, {
orchestration: currentConfig.orchestration,
settings: currentConfig.settings,
- env_vars: currentConfig.env_vars
+ env_vars: Object.keys(envVarsToSend).length > 0 ? envVarsToSend : undefined
})
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/components/PluginSettingsForm.tsx` around lines
166 - 171, The connection test is sending masked secret values from
currentConfig.env_vars which causes false failures; update handleTestConnection
to reuse the same filtering used in handleSave (use selectedPluginId,
currentConfig.env_vars and originalPluginConfig.env_vars) — build a filtered
env_vars object that skips keys whose value is the mask (e.g., '••••' or
startsWith masked dots) or that are unchanged from
originalPluginConfig.env_vars, then pass that filtered env_vars to
systemApi.testPluginConnection so masked secrets are not sent to the backend.
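The filtering the prompt describes can be reduced to a standalone sketch; the helper name and the '••••' mask marker are assumptions mirroring the suggested fix above, not existing code in the repository:

```typescript
// Hypothetical helper: drop env vars whose value still carries the backend's
// mask placeholder, so the stored secret is kept server-side when the key is
// simply omitted from the outgoing payload.
const MASK = '••••'

function filterMaskedEnvVars(
  envVars: Record<string, string>
): Record<string, string> {
  const filtered: Record<string, string> = {}
  for (const [key, value] of Object.entries(envVars)) {
    if (!value.includes(MASK)) {
      filtered[key] = value
    }
  }
  return filtered
}

// Masked API_KEY is skipped; plain HOST passes through.
console.log(filterMaskedEnvVars({ API_KEY: '••••abcd', HOST: 'localhost' }))
```

The same helper could then back both handleSave and handleTestConnection, so the two code paths cannot drift apart again.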
// Parse server messages
try {
  const message = JSON.parse(event.data)

  // Handle error messages from backend
  if (message.type === 'error') {
    const errorMsg = message.message || 'Unknown error from server'
    console.error('❌ Server error:', errorMsg)

    setError(errorMsg)
    setCurrentStep('error')
    setDebugStats(prev => ({
      ...prev,
      lastError: errorMsg,
      lastErrorTime: new Date()
    }))

    // Stop recording and cleanup
    cleanup()
    setIsRecording(false)
  }

  // Handle other message types (interim_transcript, etc.)
  else if (message.type === 'interim_transcript') {
    console.log('📝 Received interim transcript:', message.data)
  }

} catch (e) {
  // Not JSON, ignore
  console.log('📨 Non-JSON message:', event.data)
}
Abort the startRecording flow when a server error arrives.
The new server-error handler calls cleanup() and flips state, but startRecording can still proceed and set currentStep/isRecording afterward. That can leave the UI in “recording” while resources are already torn down. Add an abort ref and check it between awaited steps.
✅ Suggested fix (guard start flow on server error)
@@
const processorRef = useRef<ScriptProcessorNode | null>(null)
const durationIntervalRef = useRef<ReturnType<typeof setInterval>>()
const keepAliveIntervalRef = useRef<ReturnType<typeof setInterval>>()
const chunkCountRef = useRef(0)
const audioProcessingStartedRef = useRef(false)
+ const serverErrorRef = useRef<string | null>(null)
@@
if (message.type === 'error') {
const errorMsg = message.message || 'Unknown error from server'
console.error('❌ Server error:', errorMsg)
+ serverErrorRef.current = errorMsg
@@
const startRecording = useCallback(async () => {
try {
+ serverErrorRef.current = null
setError(null)
setCurrentStep('mic')
@@
const stream = await getMicrophoneAccess()
+ if (serverErrorRef.current) return
@@
const ws = await connectWebSocket()
+ if (serverErrorRef.current) return
@@
await sendAudioStartMessage(ws)
+ if (serverErrorRef.current) return
@@
await startAudioStreaming(stream, ws)
+ if (serverErrorRef.current) returnAlso applies to: 412-435
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/hooks/useSimpleAudioRecording.ts` around lines
231 - 262, The server-error handler calls cleanup() and flips state but
startRecording can still continue; add an abort ref (e.g., abortedRef or
startAbortRef) that the error handler sets to true when a server error is
received, then update startRecording to check that ref after any await points
and before setting setCurrentStep or setIsRecording so the flow bails out if
aborted; ensure cleanup() is still called once and the same abort guard is added
in the other startRecording path referenced in the comment (the second
startRecording block) so both flows respect the server-error abort.
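The guard pattern the prompt describes can be shown as a minimal standalone sketch; `Ref` and `startFlow` are illustrative names, not symbols from the codebase:

```typescript
// Minimal sketch of the abort-ref guard: an error handler sets the ref, and
// the multi-step async flow re-checks it after every awaited step so it bails
// out instead of flipping UI state after resources were already torn down.
type Ref<T> = { current: T }

async function startFlow(
  serverErrorRef: Ref<string | null>,
  steps: Array<() => Promise<void>>
): Promise<boolean> {
  serverErrorRef.current = null
  for (const step of steps) {
    await step()
    if (serverErrorRef.current) return false // error arrived mid-flow: abort
  }
  return true // all steps completed; safe to mark "recording"
}

const errRef: Ref<string | null> = { current: null }
startFlow(errRef, [
  async () => {},                                  // e.g. getMicrophoneAccess
  async () => { errRef.current = 'server error' }, // error handler fires here
  async () => { throw new Error('must not run') }, // e.g. startAudioStreaming
]).then((ok) => console.log('completed:', ok))     // logs "completed: false"
```

A ref (rather than React state) is the right vehicle here because the WebSocket error callback and the in-flight startRecording closure must observe the same mutable cell without waiting for a re-render.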
- api_family: mock
  api_key: mock-key-not-used
  description: Mock STT for testing (batch)
  model_provider: mock
  model_type: stt
  model_url: http://localhost:9999
  name: mock-stt
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and read the file mentioned in the review
find . -name "mock-services.yml" -type f | head -5
🏁 Script executed:
# Look at the structure and size of the file
if [ -f "tests/configs/mock-services.yml" ]; then
  wc -l tests/configs/mock-services.yml
fi
🏁 Script executed:
# Read the entire mock-services.yml file to understand the context
if [ -f "tests/configs/mock-services.yml" ]; then
  cat -n tests/configs/mock-services.yml
fi
🏁 Script executed:
# Search for other references to mock-services or localhost in config/test files
rg "localhost.*9999|host\.docker\.internal" --type yaml -A 2 -B 2
🏁 Script executed:
# Check if there's a Docker Compose or similar setup file
fd -e "docker-compose" -e "Dockerfile" -e "compose.yml" -e "compose.yaml" | head -10
🏁 Script executed:
# Look for any documentation or comments about mock service setup
rg "mock.*stt|localhost.*9999" --type-list | head -20
🏁 Script executed:
# Search for where mock-services.yml is referenced or used
rg "mock-services\.yml" -A 3 -B 3
Fix inconsistent mock STT URL for Docker container networking.
Line 54 uses http://localhost:9999, but all other mock services in this file use host.docker.internal (see lines 27, 37, and 73 for mock-llm, mock-embed, and mock-stt-stream respectively). From inside containers, localhost refers to the container itself, not the host. Change to host.docker.internal to match the pattern and ensure the batch STT service is reachable during testing.
Fix
- model_url: http://localhost:9999
+ model_url: http://host.docker.internal:9999
🤖 Prompt for AI Agents
In `@tests/configs/mock-services.yml` around lines 49 - 55, The mock STT service
entry named "mock-stt" has model_url set to "http://localhost:9999", which is
inconsistent with other mock services and fails from inside Docker; update the
model_url value for the "mock-stt" service (the YAML block with name: mock-stt
and model_type: stt) to use "http://host.docker.internal:9999" so containerized
tests can reach the host-based mock STT batch service.
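One related caveat: on Linux hosts, host.docker.internal is not defined by default, so the test compose file may also need a host-gateway mapping for the suggested URL to resolve. A hypothetical snippet (the service name is illustrative, not from the repository's compose file):

```yaml
# Map host.docker.internal to the Docker host gateway so a containerized
# backend can reach the mock STT server listening on the host at port 9999.
services:
  advanced-backend:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```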
| Metric | Count |
|---|---|
| ✅ Passed | 87 |
| ❌ Failed | 8 |
| 📊 Total | 95 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
| Metric | Count |
|---|---|
| ✅ Passed | 119 |
| ❌ Failed | 5 |
| 📊 Total | 124 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-pr-labeled - HTML reports
- robot-test-results-xml-pr-labeled - XML output
| Metric | Count |
|---|---|
| ✅ Passed | 87 |
| ❌ Failed | 8 |
| 📊 Total | 95 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
| Metric | Count |
|---|---|
| ✅ Passed | 120 |
| ❌ Failed | 4 |
| 📊 Total | 124 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-pr-labeled - HTML reports
- robot-test-results-xml-pr-labeled - XML output
Summary by CodeRabbit
Release Notes
New Features
UI Enhancements
Bug Fixes
Testing